@Make(Report)

@center(@b[Issues of Separation in Function Cells and Value Cells])

@center(by Richard P. Gabriel and Kent M. Pitman)

@chapter(Introduction)

In 1981 the emerging Common Lisp [Steele 84] community turned to Scheme for some of
its motivation and inspiration. Adopting lexical scoping proved one of
the most important decisions the Common Lisp group ever made.

One aspect of Scheme which was left behind, however, was its
unified namespace for functions and values, with the accompanying uniform
evaluation rule for expressions in function and argument positions
within the language.  At the
recent ACM conference on Lisp and Functional Programming, members of the
European community involved in the design of a dialect called Eulisp
raised the issue that perhaps Common Lisp should have adopted this paradigm
and that perhaps it would still be appropriate to do so.

Many people in the Common Lisp community were not happy about the
proposed change. Technical, philosophical, and political arguments on
both sides of the issue quickly became apparent. Because the issue has
proven so controversial, the Common Lisp community felt it
important to document both sides clearly and in an unbiased way
before attempting to make any decision on the subject.

Scheme and Eulisp have chosen to adopt a unified namespace. Several questions
remain to be answered:
@Begin(Itemize, Spread 0)
Is it technically feasible for the Common Lisp community to change?

Is it technically desirable for the Common Lisp community to change?

Is it politically desirable for the Common Lisp community to change?

Will the Common Lisp community choose to change?
@End(Itemize)

To an extent, the issue to be decided may not simply be whether the
function and value namespaces in Common Lisp will merge, but perhaps
also the larger issue of whether any of the several Lisp
communities -- Common Lisp, Scheme, and Eulisp -- can or should
eventually merge.

@chapter(Notation and Terminology) 

We will begin by establishing some standard terminology for use within
the context of this paper. The impatient reader may wish to skip this 
section and refer back to it only if he has a question about some term
that he does not understand.

A @b(function) is anything that may correctly be given to the @t(FUNCALL) or
@t(APPLY) function and is to be executed as code when arguments are supplied.
A @b(non-function) is any other Lisp object.

An @b(identifier) is the name of a symbol or a variable; this is not 
an exclusive partition.  An identifier is essentially a print name.

A @b(symbol) is a Lisp data structure which may have a value cell, 
a function cell, a property list, and so on.

A @b(variable) is an identifier that names a location.

A @b(binding) is a pairing of an identifier with a location in which
a Lisp object may be placed.

A @b(lexical variable) is a binding in which the identifier is not
taken to refer to a symbol.
A @b(special variable) is a binding in which the identifier @i(is) taken
to refer to a symbol; a symbol with a value in its value cell constitutes
such a binding, in that the name of the symbol is paired with the value
cell of the symbol.

An @b(environment) is the set of all bindings in existence at some given time.
We will call a subset of an environment a @b(sub-environment).

A @b(namespace) is a sub-environment in which the objects to
which the location part of a binding may point are restricted to some
subset of all possible objects (not necessarily first-class Lisp objects).
In this white paper, there are two namespaces of concern, which we will
term the ``value namespace'' and the ``function namespace.'' Other 
namespaces include those for tag names (used by @t(TAGBODY) and @t(GO)) 
and those for block names (used by @t(BLOCK) and @t(RETURN-FROM)), but 
the objects in the location parts of their bindings are not first-class
Lisp objects.

The @b(value namespace) is a sub-environment whose location parts are not
restricted to point to any particular kind of object. A binding that pairs
an identifier referring to a symbol with that symbol's value cell is in the
value namespace.  Lexical variables, such as those introduced by @t(LET), 
@t(LAMBDA), and @t(MULTIPLE-VALUE-BIND), are in the value namespace.  

The @b(function namespace) is a sub-environment whose location parts 
are restricted to point to functions, except under error conditions.
A binding that pairs an identifier referring to a symbol with that symbol's
function cell is in the function namespace. Functional lexical variables,
such as those introduced by @t(FLET), @t(LABELS), and @t(MACROLET), 
are in the function namespace.

Lisp's evaluation rules specify, given an expression and an environment, how
to produce a value or values and a new environment. In order to do this,
the meanings of identifiers in a program text need to be determined, and
this requires determining in which namespace to interpret the identifiers.
For example, a symbol can have something in its value cell and something
else in its function cell; which of these objects is referred to by the
symbol in a piece of code depends upon which namespace is defined to be
used for the context in which the symbol appears.
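
As a simple illustration (our own hypothetical fragment, not an example drawn
from any particular program), consider a symbol @t(FOO) given both a value and
a function definition:
@Begin(Example)
(SETQ FOO '(A B C))            ;places a list in the value cell of FOO
(DEFUN FOO (X) (CONS X X))     ;places a function in the function cell of FOO
(FOO FOO)                      ;=> ((A B C) A B C) in Lisp-2
@End(Example)
In a Lisp@-[2] such as Common Lisp, the first @t(FOO) in the call is interpreted
in the function namespace and the second in the value namespace.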

The Common Lisp specification defines that it is an error to place a
non-function in the function cell of a symbol. Since this is a discussion
of correct programs, it should be assumed that all objects placed in the
function cell of a symbol are valid functions unless specifically stated
otherwise.

The function and value namespaces are distinct in Common Lisp because,
given a single name, the function namespace mapping and the value
namespace mapping can yield distinct objects. The two mappings
@i(might) yield the same object, but that would be mere coincidence.

For the purposes of this document, we may wish to refer to two abstract 
dialects of Lisp, which we shall call Lisp@-[1] and Lisp@-[2].

@b(Lisp@-[1]) has a single namespace which serves a dual role as the function
namespace and value namespace. That is, its function namespace and value
namespace are not distinct. In Lisp@-[1], the functional position of a form
and the argument positions of the form are evaluated according to the same rules.
Scheme [Rees 86] and Eulisp [Padget 86] are Lisp@-[1] dialects.

@b(Lisp@-[2]) has distinct function and value namespaces. In Lisp@-[2], the
rules for evaluation in the functional position of a form are distinct from
those of evaluation in the argument positions of the form.
Common Lisp is a Lisp@-[2] dialect.

@chapter(Historical Perspective)

Historically, most Lisp dialects have adopted a two-namespace approach to
the naming problem. Largely this is because most dialects followed Lisp 1.5
[McCarthy 65] unless there was some interesting reason not to follow it.

Lisp 1.5 broke symbols into values and functions; values were stored on an
association list, and functions on the property lists of symbols.
Compiled and interpreted code worked differently from each other.  In the 
interpreter, the association list was where all bindings were kept.  When 
an identifier was encountered (an `atomic symbol' in Lisp 1.5 terminology), 
it was taken to be a variable to be evaluated for its value. First the 
@t(APVAL) part of the symbol was interrogated - an @t(APVAL) was ``@t(A)
@t(P)ermanent, system-defined @t(VAL)ue'' stored in a specific place in 
the symbol. Second, the association list was searched. Finally, if neither 
of these other two possibilities worked, an error was signalled.

When a combination was encountered, the function position was evaluated
differently from the other positions. First, the symbol was interrogated
to see whether there was a function definition associated with the symbol,
then the association list was searched.

Here we can see two namespaces at work, though non-symbol variables (Lisp
1.5 did not have lexical variables in interpreted code) were treated
somewhat uniformly: when there were no function definitions associated with
a symbol, there was one namespace, explicitly represented by the association 
list.

Compiled code worked a little differently, and from its internals
we can see where the two namespaces came about in descendants from
Lisp 1.5, at least conceptually.

The Lisp 1.5 compiler supported ``common'' and ``special'' variables. 
A common variable enabled compiled and interpreted code to communicate
with each other. A common variable was bound on an explicit association 
list, and to evaluate such a variable a call to @t(EVAL) was emitted to 
determine the value.  A special variable was the compiler's modeling 
of free variables and closely matched what is called today ``shallow 
binding.''  Ordinary variables were compiled into what we have termed
``lexical variables.''

Thus, we see all of the components of the two-namespace world in Lisp
1.5, along with some of the components of the one-namespace world.

Lisp for the PDP-6 at MIT adopted the style of the Lisp 1.5 special
variables for dynamic binding in both compiled and interpreted code,
eliminating common variables. Compilers were still written to try to
interpret special variables as lexical variables in as many places as
possible. The value of a symbol was stored in the special value cell of
the symbol and the function remained on the property list as it did in
Lisp 1.5.

@chapter(Technical Arguments)

@section(Notational Simplicity)

Many people believe that having the function position be evaluated
differently from the argument positions in Lisp@-[2] is very inelegant.
The reason that different evaluation rules are needed is that
there are different namespaces or environments for function bindings
and for value bindings. Therefore there is an evaluation rule for
each environment.  

The language is slightly more complicated in situations where we want 
to do either of the following two actions:
@Begin(Enumerate)
Fetch the value of an identifier in the value namespace and call it
as a function.

Fetch the value of an identifier in the function namespace and pass it 
around as a value.
@End(Enumerate)

To use the value of an identifier in the value namespace as a function,
Lisp@-[2] provides this notation:
@Begin(Example)
(FUNCALL @i(identifier) . @i(arguments))
@End(Example)
For example, in Lisp@-[2] one might write:
@Begin(Example)
(DEFUN MAPC-1 (F L) (DOLIST (X L) (FUNCALL F X)))
@End(Example)
In Lisp@-[1], one would write:
@Begin(Example)
(DEFUN MAPC-1 (F L) (DOLIST (X L) (F X)))
@End(Example)

To use the value of an identifier in the function namespace as a
normal value, Lisp@-[2] provides this notation:
@Begin(Example)
(FUNCTION @i(identifier))
@End(Example)
which is often abbreviated as simply @t(#')@i(identifier).

For example, in Lisp@-[2] one might write:
@Begin(Example)
(MAPC #'PRINT '(A B C D))
@End(Example)
In Lisp@-[1], one would write:
@Begin(Example)
(MAPC PRINT '(A B C D))
@End(Example)

The differences are more striking in a larger, more complex example.

In Lisp@-[2], one could write the @t(Y) operator as:
@Begin(Example)
(DEFUN Y (F)
  ((LAMBDA (G) #'(LAMBDA (H) (FUNCALL (FUNCALL F (FUNCALL G G)) H)))
   (LAMBDA (G) #'(LAMBDA (H) (FUNCALL (FUNCALL F (FUNCALL G G)) H)))))
@End(Example)
while in Lisp@-[1], one can write:
@BEGIN(Example)
(DEFUN Y (F)
  ((LAMBDA (G) (LAMBDA (H) ((F (G G)) H)))
   (LAMBDA (G) (LAMBDA (H) ((F (G G)) H)))))
@END(Example)

The call to this operator in order to compute @t(6!) in Lisp@-[2] 
would look like:
@Begin(Example)
(FUNCALL (Y #'(LAMBDA (FN)
		#'(LAMBDA (X)
		    (IF (ZEROP X) 1 (* X (FUNCALL FN (- X 1))))))) 
	 6)
@End(Example)
In Lisp@-[1], the same call would look like:
@Begin(Example)
((Y (LAMBDA (FN)
      (LAMBDA (X)
	(IF (ZEROP X) 1 (* X (FN (- X 1)))))))
 6)
@End(Example)

Some argue that the Lisp@-[1] form is easier to read because it is more concise.
Others feel that the Lisp@-[2] form is easier to read because uses of functions
whose names are not constant are clearly marked.

It would not surprise us to find that the question of which is easier to
read is strongly correlated to the frequency with which a given programmer
actually passes functions as arguments or uses passed arguments functionally.
If this is an uncommon event, it is probably useful to have the incident
flagged visually. If this is a common event, the additional notation may 
quickly become cumbersome.

@section(Multiple Denotations for a Single Name)

Some people find it less confusing to have a single meaning for a name.
Fewer meanings mean less to remember.

For example, suppose a programmer has defined a function @t(F) as:
@Begin(Example)
(DEFUN F (X) (+ X 1))
@End(Example)
Then suppose he is writing a new function @t(G) and he wants it to
take a functional parameter @t(F) which it is to apply to its other argument.
Suppose he writes:
@Begin(Example)
(DEFUN G (F) (F 3))
@End(Example)
Issues of defined program semantics aside, it's probably obvious that the
programmer who wrote this piece of code meant to call the function named
by the formal parameter @t(F) on the argument @t(3). 
In Lisp@-[2], however, this function will ignore its argument named 
@t(F) and simply invoke the globally defined function named @t(F) on @t(3).
Notice, by the way, that this is precisely what Lisp 1.5 would have done.

Unfortunately, not all situations are as clear cut as this. For example, 
consider the following:
@Begin(Example)
(DEFUN PRINT-SQUARES (LIST)
  (DOLIST (ELEMENT LIST)
    (PRINT (LIST ELEMENT (EXPT ELEMENT 2)))))
@End(Example)
In this definition, there are three uses of the name @t(LIST). The first is in
the function's formal parameter list. The second is in the initialization of the 
@t(DOLIST) variable. The third is in the @t(PRINT) expression. This program, 
which is valid in current Common Lisp, would not be valid in Lisp@-[1]
because the name @t(LIST) could not simultaneously denote a list of numbers and
a function. In Lisp@-[1], a common thing to write instead would be:
@Begin(Example)
(DEFUN PRINT-SQUARES (LST)
  (DOLIST (ELEMENT LST)
    (PRINT (LIST ELEMENT (EXPT ELEMENT 2)))))
@End(Example)

In the function @t(PRINT-SQUARES) above, the parameter named @t(LST) would be
better named @t(LIST), but in Lisp@-[1] that name is unavailable because the
body must also use @t(LIST) to name the list-constructing function.

As should be clear from these examples, the advantage of treating the
function and argument positions the same is that using parameters as
functions is made more convenient syntactically.

The disadvantage is that @i(not) using parameters as functions is made
less convenient syntactically, because parameter names must be more
carefully chosen in order to not shadow the names of globally defined
functions which will be needed in the function body.

Of course, care in naming must already be observed in order to assure
that variable names chosen for some inner binding construct not shadow
the names of variables bound by outer binding constructs. For example,
consider:
@Begin(Example)
(DEFUN HACK-LIST (LIST)
  (LET ((LST (HACK-LIST LIST)))
    (HACK-SOME-MORE LIST LST)
    (SHUFFLE LIST LST)))
@End(Example)
Nevertheless, the degree of care required to avoid name collisions in
Lisp@-[1] is theoretically no less than that required in Lisp@-[2],
and has been statistically observed to be far greater, a point to which we
will return later.

The following is a simple example of some of the important issues 
in variable naming:
@Begin(Example)
(DEFUN ODDITY (LIST) (LIST LIST LIST))
(ODDITY #'CONS)
@End(Example)
Depending on which way the issue is decided, the return value from
this call might be either of:
@Begin(Example)
(#<SUBR CONS> . #<SUBR CONS>)
(#<SUBR CONS> #<SUBR CONS>)
@End(Example)
the first being the Lisp@-[1] result (the parameter @t(LIST) is called) and
the second the Lisp@-[2] result (the globally defined @t(LIST) function is called).

@section(Referential Clarity)

In Lisp@-[2], without knowing the context, it is not possible to decide 
whether the function or the value namespace is the proper one to use.
These two forms result in different interpretations of an expression, @i(x):
@Begin(Itemize)
@t[(@i(x) ...)]

@t[(... @i(x) ...)]
@End(Itemize)
A basic ``rule'' of Lisp style is that code is clearest when 
the least amount of context is necessary to determine what each 
expression does. Unfortunately, that rule is violated in Lisp@-[2].

In a presentation at the 1986 ACM Conference on Lisp and Functional
Programming, Steele complained that the @g(a) operator in 
Connection Machine@+[(R)] Lisp [Steele 86], which is implemented as a readmacro,
was not possible to implement in the way he desired because of this
context problem. The problem is that the @g(a) operator needed to
expand differently depending on whether it was expanding in a 
functional or argument position. In Lisp@-[1], this problem would
not have arisen. However, the problem could be solved in Lisp@-[2]
by introducing ``lambda macros'' such as those which are already
used in Zetalisp [Symbolics 86b].

@section(Compiler Simplicity)

In current Common Lisp compilers, special case code is used when
deciding which namespace mapping to use when a variable is examined
by the compiler. 

The maintainers of some Common Lisp compilers claim that a change
from Lisp@-[2] to Lisp@-[1]  semantics would result in simpler, 
smaller, faster compilers. One reason for this is that knowledge 
about which namespace is in effect at any given time is often
procedurally embedded.  By merging the two namespaces, the same
pieces of code can be used in more places, reducing the overall
number of places where such information is represented, and ultimately
making the maintenance task simpler.

The maintainers of other Common Lisp compilers claim, however, that
a change from Lisp@-[2] to Lisp@-[1] semantics would reduce the 
complexity of their compilers little if at all -- that it might
force small changes at points distributed throughout the system, 
but that overall the  compiler would not change very much.

There is some concern, however, that the overall complexity of 
compilers might increase as a result of such a change. This belief
is based on the observation that the change effectively throws away
type information (specifically, the type
information which is implicit for the contents of the function cell). 
In some cases, this information may be recoverable, but the compiler
may have to perform a proof in order to do so. In other cases,
the compiler may be unable to attempt a proof due to problems which
reduce to the halting problem, and the compiler may have to take
an unnecessarily conservative approach. 

To understand this point, consider a function such as:
@Begin(Example)
(DEFUN F (X) (G X))
@End(Example)
Even in this simple function, we are assuming that
@Begin(Example)
(SYMBOL-FUNCTION 'G)
@End(Example)
will contain a function. With compiler safety on, a good compiler 
must concern itself with issues of whether this cell will ever
contain a non-function. If it cannot prove that it will not, then
it must generate inefficient code. This can be made trivial to 
prove in Lisp@-[2] because it is legal to forbid non-functions
from ever being placed in the function cell. For example, VAX LISP [Digital 86]
does this. Consequently, the VAX LISP compiler can safely generate 
code which simply jumps to the contents of the value cell of @t[G].
In Lisp@-[1], however, a proof could be much more complicated.
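
To make the concern concrete, here is a rough sketch (our own illustration,
not drawn from any particular compiler, and written in Common Lisp notation
for concreteness, with @t(G) read as the single cell of a Lisp@-[1]) of the
kind of checking code a safe Lisp@-[1] compiler might conceptually have to
produce for the call @t[(G X)] when it cannot prove that @t(G) holds a function:
@Begin(Example)
;; Hypothetical expansion of the call (G X) under safe compilation.
(LET ((TEM G))
  (IF (FUNCTIONP TEM)
      (FUNCALL TEM X)
      (ERROR "The value of G is not a function: ~S" TEM)))
@End(Example)
A Lisp@-[2] implementation which forbids non-functions in function cells can
omit the test and jump directly to the contents of the cell.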

The bottom line here is not that some compilers are better than 
others, but rather that compilers may vary widely in their nature,
and, as such, the proposed change may have widely
varied effects depending upon the implementation.

@section(Higher Order Functions)

While functions such as @t[Y] above, which directly manipulate 
other functions, can be written in either Lisp@-[1] or Lisp@-[2],
many programmers feel that Lisp@-[1] allows one to write them more
perspicuously.  The point is that the more cumbersome notation of
Lisp@-[2] does nothing to encourage and may even discourage the 
writing of such functions.
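
For example (an illustration of our own, not taken from either dialect's
literature), a simple @t(COMPOSE) reads in Lisp@-[2] as:
@Begin(Example)
(DEFUN COMPOSE (F G) #'(LAMBDA (X) (FUNCALL F (FUNCALL G X))))
@End(Example)
while in Lisp@-[1] it reads as:
@Begin(Example)
(DEFUN COMPOSE (F G) (LAMBDA (X) (F (G X))))
@End(Example)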

@section(Abstraction Sharing)

In a single namespace Lisp, it is easier to define an abstract piece of
code that shares abstractions between data and functions. An example of
this is the abstract stream code in Chapter 4 of [Abelson 85], in
which it is shown how to write streams based either on functions or on data
structures.  Again, all of this is possible in Common Lisp as it stands,
but it is not an encouraged style. The problem is that it is a burden to
think about which namespace mapping will be in force for various
variables.
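
For instance, here is a minimal sketch (our own, loosely in the spirit of the
stream examples in [Abelson 85], not code taken from that book) of a stream
whose tail is represented by a function. In Common Lisp the consumer must
remember that the tail lives in the value namespace and use @t(FUNCALL);
in a Lisp@-[1] the call would simply be @t[((CDR S))]:
@Begin(Example)
;; A stream is a pair of an element and a thunk producing the rest.
(DEFMACRO CONS-STREAM (HEAD TAIL) `(CONS ,HEAD #'(LAMBDA () ,TAIL)))
(DEFUN HEAD (S) (CAR S))
(DEFUN TAIL (S) (FUNCALL (CDR S)))      ;in Lisp-1: ((CDR S))

(DEFUN INTEGERS-FROM (N) (CONS-STREAM N (INTEGERS-FROM (+ N 1))))
;; (HEAD (TAIL (TAIL (INTEGERS-FROM 0)))) => 2
@End(Example)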

@section(Multiprocessing)

Functional programming style has been seen to be conducive to multiprocessing.
That is, the functional style of programming results in programs which are
more easily rendered into a parallel style. For evidence of this, take
a typical Common Lisp program and contrast its style and suitability for
parallelization with those of the same program as it might be written in Scheme.

By transitivity, then, since Lisp@-[1] tends to better encourage functional 
programming style, Lisp@-[1] is also more conducive to multiprocessing.

Of course, Common Lisp is not designed to accommodate multiprocessing and
it would take more than a unification of the function and value
namespaces to allow Common Lisp to support multiprocessing in any
serious way. At this time, integrated support of multiprocessing has not
been established as an explicit goal of Common Lisp. Nevertheless, it
seems apparent that experience with a more functional programming style
will provide a good foundation for programmers who later move to a
language which does support multiprocessing in an integrated way, so
this issue should not be overlooked.

@section(Number of Namespaces)

There are really more namespaces than just the two being
discussed here. As we noted earlier, other namespaces include at least
those of blocks, tags, types, and declarations. As such, the names 
Lisp@-[1] and Lisp@-[2] which we have been using are slightly misleading. 
The names Lisp@-[5] and Lisp@-[6] might be more appropriate.

This being the case, the unification of the function and value namespaces
does not accomplish as much as it might initially appear to.
Even with that change, the interpretation of a symbol in Common Lisp
would still depend on the context to disambiguate variables from symbols 
from type names and so on.

On the other hand, some proponents of the change have suggested that,
in time, these other namespaces would be collapsed as well. Dialects of
Scheme have done this -- some to a greater extent than others. 

In fact, however, because of the existence of functions like @t(GET), 
@t(ASSOC), and @t(GETHASH), which allow users to effectively associate 
new kinds of information with symbols, the number of namespaces is not
really bounded at all. The fact that this does not 
affect the complexity of the compiler is more a statement about the 
level of understanding that compilers have than a statement about the 
abstract effect. The truth is that these additional meanings which can 
be associated with symbols can and do have a very powerful effect.
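
As a trivial illustration of such a user-made namespace (our own example),
a programmer might store argument-list information under a property name:
@Begin(Example)
(SETF (GET 'FACTORIAL 'ARGLIST) '(N))   ;a new ``arglist'' namespace
(GET 'FACTORIAL 'ARGLIST)               ;=> (N)
@End(Example)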

Indeed, much of the power of associative functions like @t(GET) derives 
from what amounts to a structured kind of pun -- the fact that a single
symbol (or any object, for that matter) may have more than one kind of
information usefully associated with it. The power and importance of 
this kind of structured interplay between arbitrary namespaces is hard
to deny and probably does not warrant the level of disdain which is 
sometimes given it by Scheme enthusiasts.

@section(Macros and Name Collisions)

Macros as they exist in Common Lisp are very close to being semantically
bankrupt. The problem is that they expand into an expression which is
composed of symbols which have no attached semantics. When
substituted back into the program, a macro expansion could conceivably
take on a quite surprising meaning depending on the local environment.

Some symbols which ultimately appear in the expansion of a macro are
obtained by the macro definition through its parameter list from the macro's
caller. It is therefore possible to use those symbols safely. However,
writers of macros often work on the hypothesis that additional
functional variables may be referenced in macros as if they were globally
constant. Consider the following macro definition:
@Begin(Example)
(DEFMACRO MAKE-FOO (THINGS) `(LIST 'FOO ,THINGS))
@End(Example)
Here @t(FOO) is quoted, @t(THINGS) is taken from the parameter list for
the macro, but @t(LIST) is free. The writer of this macro definition is
almost certainly assuming either that @t(LIST) is locally bound in the
calling environment  and is trying to refer to that locally bound name,
or else that @t(LIST) is to be treated as constant and that the author of
the code in the calling environment knew that he should not locally bind
@t(LIST). In practice, the latter assumption is almost always made.

If the consumer of the above macro definition writes
@Begin(Example)
(DEFUN FOO (LIST) (MAKE-FOO (CAR LIST)))
@End(Example)
in Lisp@-[1], it is likely that there will be a bug in the code: the
parameter @t(LIST) shadows the globally defined @t(LIST) function on which
the expansion of @t(MAKE-FOO) depends.

Here is another example of code that would be a problem in Lisp@-[1]:
@Begin(Example)
(DEFMACRO FIRST (LIST) `(CAR ,LIST))
(DEFUN TEST-CAR (CAR TEST-LIST)
  "The `driver' program for Frobozz Automobile, Inc.'s quality assurance test."
  (DO ((TESTS TEST-LIST (REST TESTS)))
      ((NULL TESTS))
    (FUNCALL (FIRST TESTS) CAR)))
@End(Example)
In Lisp@-[1], the expansion of @t[(FIRST TESTS)] is @t[(CAR TESTS)], and the
@t(CAR) in that expansion would refer to the parameter @t(CAR) (an automobile)
rather than to the system function @t(CAR).

In some Scheme implementations it is possible to write the following:

@Begin(Example)
(DEFMACRO FIRST (LIST) `(',CAR ,LIST))
@End(Example)

This is syntactically more tedious but has the obvious advantage that there
is no potential for name confusion. Nevertheless, something like this doesn't
instantly answer all technical problems.  When uses of this macro are compiled
to a file, an object such as @t(#<COMPILED-FUNCTION CAR>) must be output in
a way that will allow it to be correctly restored later. Even Lisp and Scheme
implementations which claim to allow this are likely to have difficulty in the
situation where the compiled function has no toplevel name which can easily be
accessed at load time. For example, consider the following, more complex
macro definition:

@Begin(Example)
(DEFMACRO FOO (N X) `(',(LAMBDA (X) (+ X N)) ,X))
(FOO 3 Z)
@End(Example)

For each use of this macro there may be a separate copy of the
compiled code for the anonymous function in the macro definition.
Also, in addition to writing the compiled code to a file, all
procedures necessary to support it must also be correctly saved
and restored by the file compiler.

Some systems are able to collapse structurally equal compiled-code
objects into only one copy when the occurrences of them are all in a
single file, but this requires a lot of work and doesn't even address
the issue of what happens when the equal code objects must be written
to different files.

Another problem for Lisp implementors is that when this approach is
used, the compiler ought to open-code the original @t(CAR) operation
in the @t(TEST-CAR) code rather than to code a function call to @t(CAR).
To do this, the compiler must be made to understand a new set of idioms.
Open coding these idioms will help in a few cases where the function
would then not have to be written to a file, but will not help stave
off the general problem, which is that open coding is not always 
desirable either for reasons of space efficiency or modularity.

A proposed solution to the macro problem is presented in [Kohlbecker 86],
where the author asserts that macros can be written in a style that
is like the current Common Lisp style, but in which free variables in function
position are by default taken to refer to the globally defined function of
the same name.

Another partial solution might be to include a new declaration form which
declares function definitions constant so that Common Lisp implementations
could declare all Common Lisp functions constant. The correct behavior of
a Lisp@-[1] with the built-in functions declared constant would be to 
signal an error when cases like @t(TEST-CAR) above occur. However, functions
without this declaration would not receive the benefit of this check and
macros which need to use them would continue to be error-prone. Also, it
would be necessary for users to know which names in the language were 
constant even if they did not plan to use them for their intended use.
To such users, these names would appear to be holes in the space of 
available names. For example, an automobile manufacturer would likely prefer
to use the name @t(FIRST) rather than @t(CAR) to access the first element of
a list exactly because he would want to recycle the name @t(CAR) for some
other purpose. Packages can be used to alleviate some of these problems, but
many view packages as fairly clumsy in their own right.
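
A sketch of what the constant-function declaration mentioned above might look
like (the name @t(CONSTANT-FUNCTION) is purely hypothetical; no such
declaration exists in Common Lisp today):
@Begin(Example)
(PROCLAIM '(CONSTANT-FUNCTION CAR CDR CONS LIST))  ;hypothetical declaration
@End(Example)
With such a proclamation in effect, an attempt to bind or redefine @t(CAR)
could signal an error, as described above.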

It's worth emphasizing, however, that these problems come up in 
either Lisp@-[1] or Lisp@-[2]. The issue is really just that they are
statistically more likely in Lisp@-[1] since there is less contextual 
information available to help out. Here is an example of how the problem
arises in normal Common Lisp:

@Begin(Example)
(DEFMACRO FOO (X Y) `(CONS 'FOO (CONS ,X (CONS ,Y NIL))))

(DEFUN BAZ (X Y)
 (FLET ((CONS (X Y) (CONS Y X)))
  (FOO X Y)))
@End(Example)

@t[(BAZ 1 2)] returns @t[(((NIL . 2) . 1) . FOO)], even though it seems that
@t[(FOO 1 2)] might have been intended by the programmer.

Although few (if any) implementations support its full generality in file
compilation, a strict reading of the Common Lisp specification seems to
imply that it should be acceptable to write:

@Begin(Example)
(DEFMACRO FOO (X Y) ;take a deep breath
  `(FUNCALL ',#'CONS 'FOO (FUNCALL ',#'CONS ,X (FUNCALL ',#'CONS ,Y NIL))))

(DEFUN BAZ (X Y)
  (FLET ((CONS (X Y) (CONS Y X)))
    (FOO X Y)))
@End(Example)

Here @t[(BAZ 1 2)] should evaluate to @t[(FOO 1 2)], just as everyone (hopefully)
expected.

Given all of this, the thoughtful reader should ask: why do macros appear
to work as often as they do?

The answer seems to be based primarily in history and statistics and not
in some theoretical foundation! In recent dialects preceding Common
Lisp, such as Maclisp [Pitman 83], it was fortunately true that there
was no @t(FLET), @t(LABELS), or @t(MACROLET). This meant that there was
an extremely high likelihood that the function bindings of identifiers 
in the macro expander's environment would be compatible with the function 
bindings of the same identifiers in the program environment. Coupled with
the fact that the only free references which most macro expansions tend 
to make are functional, this meant that writers of macros could guess
enough about how the expansion would be understood that fairly
reliable macro packages could be developed.

With the advent of @t(FLET), @t(LABELS), and @t(MACROLET), the risk of
conflict is considerably higher.  The Scheme community, which has long
had constructs with power equivalent to that of @t(FLET), has
never adopted a macro facility into the language. This is because, 
among other things, macros have generally seemed like a semantically
bankrupt concept to many of the Scheme designers. This data is again
supportive of the argument that Lisp@-[1] dialects are more prone to 
name collisions than Lisp@-[2] dialects.

The main reason why we have not seen huge numbers of problems in 
Common Lisp to date may well be that most Common Lisp programmers are
still programming using a Maclisp programming style and using forms
like @t(FLET) in only very limited ways. Those groups which do use
@t(FLET) and @t(LABELS) heavily may well be composed of individuals who
learned Lisp with Scheme and do not use macros heavily. 

It should not be surprising to find increasingly many name collision
problems reported by users of Common Lisp macros as time progresses.
A change from Lisp@-[2] to Lisp@-[1] semantics for identifiers would
very likely hasten the process.

@section(Space Efficiency)

If a symbol is used both for its value and as a function, it currently costs no
additional space. Any program which has symbols which are used to denote 
distinct functions and values, however, would have to be changed. In general,
this means that some new symbols would be introduced. In most cases, the number
of new symbols introduced would not be extremely large, but there might be
pathological applications where there were exceptions. In the Lucid Lisp system [Lucid 86],
there are 14 such symbols, and in these cases the value cell is being used
as a cache for an object related to the function. In the MACSYMA system [Symbolics 86a]
there are roughly 35 such symbols out of 10,000.

Using the same symbol to refer to both a function and a value can be more
space efficient, since it means adding only one additional cell to an existing
data structure that already has on the order of 5 to 10 cells anyway.

This issue can be put quantitatively.  Let @i(N) be the number of symbols in
a system, let @i(S@-[2]) be the space occupied by the average symbol in an
implementation of Lisp@-[2], let @i(S@-[1]) be the space occupied
by the average symbol in an implementation of Common Lisp as modified by
merging the value and function environments, and let @i(X) be the number of
symbols that must be added to a system to resolve name conflicts.  Then
the space saved by having separate environments is
@Begin(Example)
((@i(N) + @i(X)) * @i(S@-[1])) - (@i(N) * @i(S@-[2]))
@End(Example)

For example, if @i(N) is 8000, @i(X) is 14, @i(S@-[1]) is 28 (bytes), and 
@i(S@-[2]) is 32 (bytes), then the space saved by Lisp@-[2] is -31608 (bytes).  
That is, 12% of the symbol space used by such a Lisp@-[2] implementation 
might be saved if it were made to be a Lisp@-[1] implementation instead.
In order for there to be no net change in the amount of storage between
a two-namespace and a one-namespace Lisp, one would need over 1100 symbols
to be added to the system to resolve name conflicts.
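
The arithmetic in the example above can be written directly as Lisp
expressions (using the figures quoted in the text):
@Begin(Example)
(- (* (+ 8000 14) 28) (* 8000 32))   ;=> -31608, the ``space saved'' by Lisp-2
(/ (* 8000 (- 32 28)) 28.0)          ;=> 1142.86, the break-even value of X
@End(Example)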

This issue is not likely to be a major point of contention.

@section(Time Efficiency)

In Lisp@-[2], a function call to a function associated with a symbol
involves indirecting through the symbol's function cell. A typical
implementation on stock hardware will look at the symbol's function cell,
which points to a piece of code, possibly with an intermediate pointer
through a procedure object, as in S-1 Lisp [Brooks 82]. An optimization to this is for
a function call to jump directly to the correct code object, perhaps
indirecting through a link table of some sort, but eliminating the
indirection through the symbol's function cell.  In order for @t(DEFUN) and
@t[(SETF (SYMBOL-FUNCTION...)...)]  to cause existing running code to work,
the operation of changing a symbol's function cell will invalidate the
link table or otherwise cause the correct new link to be made.

To use this same optimization in a single namespace Lisp, @t(SETQ), rather
than @t[(SETF (SYMBOL-FUNCTION...)...)] must do the invalidating or
re-linking.  The common case is that there is not a function associated
with a symbol - programmers do not often write code that changes function
definitions in inner loops - and so a flag stating there is no function
definition involved will need to be checked on each @t(SETQ).
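
A conceptual sketch of that check (the names @t(SET-SYMBOL-VALUE),
@t(SYMBOL-LINKED-P), and @t(INVALIDATE-LINKS) are invented for this
illustration and correspond to no real implementation's protocol):
@Begin(Example)
;; Hypothetical: what assigning a special variable might have to do in a
;; link-table implementation of Lisp-1.
(DEFUN SET-SYMBOL-VALUE (SYMBOL NEW-VALUE)
  (WHEN (SYMBOL-LINKED-P SYMBOL)     ;is the symbol linked as a function anywhere?
    (INVALIDATE-LINKS SYMBOL))       ;force callers to re-link
  (SETF (SYMBOL-VALUE SYMBOL) NEW-VALUE))
@End(Example)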

Of course, only an assignment to a symbol needs to be checked. This adds
a test and a branch to @t(SETQ), which might have been a single
instruction before. On some stock hardware, tricks with the addressing
hardware and word alignment can be played to make this fast in the 
non-functional value case, but in fact it seems unlikely that this could
cause any more than a 10% degradation in the most pessimistic inner loop,
and overall it is unlikely to cause a very noticeable degradation in any 
large system.

Of course, using this linking paradigm to gain speed does incur some 
storage overhead for the link table, possibly giving back a noticeable
fraction of the storage which we might have seemed to save in the 
discussion of space efficiency above. Also, the resulting code might 
be slightly larger in places due to the need for extra tests when 
assigning special variables.

Also, this issue is an illustration of the earlier claim that
simplifying the surface language might not always result in a simpler
compilation strategy since this is one of several ways in which the compiler
might be forced to be more complicated as a result of such a change.

@section(Special Variables)

If Common Lisp became a Lisp@-[1] dialect, we would need to create a conceptual
distinction between global variables which were lexical and global variables
which were special. Consider the following definitions:
@Begin(Example)
(DEFUN PLUS (X Y) (+ X Y))
(DEFUN SOMEWHAT-MORE-THAN (X) (PLUS X *SOMEWHAT*))
@End(Example)
In Lisp@-[1], there are @i(two) free variable references in the definition of
@t(SOMEWHAT-MORE-THAN) -- @t(*SOMEWHAT*) and @t(PLUS). In fact, the definition
of @t(PLUS) itself references a free variable, namely @t(+).

To avoid a compiler warning for the free use of variables such as @t(*SOMEWHAT*),
it is common in Common Lisp to declare such variables @t(SPECIAL). In order to
have compilers not warn about the free use of @t(PLUS), it would be unfortunate 
for these to have to be declared @t(SPECIAL).  Instead, it might be necessary
to introduce a new declaration which made a variable (lexically) @t(GLOBAL) but not 
@t(SPECIAL). @t(DEFUN) would presumably be made to do this declaration implicitly
for the name of the function being defined.
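
A sketch of the proposed declaration (the @t(GLOBAL) proclamation is
hypothetical and not part of Common Lisp today; @t(DEFUN) would be imagined
to perform it implicitly for the function name):
@Begin(Example)
(PROCLAIM '(GLOBAL PLUS))         ;hypothetical: PLUS names a global lexical variable
(PROCLAIM '(SPECIAL *SOMEWHAT*))  ;special, as in present-day Common Lisp
@End(Example)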

Given such a declaration, it would still be possible in a Lisp@-[1] to write
definitions such as:
@Begin(Example)
(DEFUN ZAP (FN X Y)
  (LET ((PLUS (LAMBDA (X Y) (MAPCAR PLUS X Y))))
    (LIST (PLUS (FN X) (FN Y)) (PLUS (FN (FN X)) (FN (FN Y))))))
@End(Example)
without worrying that the local binding of @t(PLUS) would dynamically affect
functions passed in as @t(FN).

Alternatively, we could say that this new declaration was not a good
idea and just assert that globally defined functions are constant and
that it is illegal to bind their names.

The new @t(GLOBAL) declaration is more flexible, but making existing 
implementations understand it correctly would involve more work for 
implementors. Making all functions be defined to be constant would probably
be less work for implementors, but would be less flexible for users.

Closely related to this is that there is currently no system-provided
dynamic variation of @t(FLET) and @t(LABELS) in Common Lisp (although in
a non-multiprocessing environment they can be more-or-less simulated by
creative use of @t(UNWIND-PROTECT)). If Common Lisp were made a
Lisp@-[1] dialect, dynamic functional variables would be something it
would get ``for free.'' And if their use became popular, it might be
desirable to have two kinds of @t(DEFUN) -- one which created special
definitions and another which created lexical definitions.
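
For instance, a dynamic rebinding of a single globally defined function can
be approximated today (in a single-process Common Lisp only, as noted, and
only as a rough sketch; the macro name is our own):
@Begin(Example)
;; Sketch: dynamically rebind the function cell of FOO around BODY-FORM.
(DEFMACRO WITH-FOO-REDEFINED (NEW-FN BODY-FORM)
  `(LET ((OLD (SYMBOL-FUNCTION 'FOO)))
     (UNWIND-PROTECT
         (PROGN (SETF (SYMBOL-FUNCTION 'FOO) ,NEW-FN) ,BODY-FORM)
       (SETF (SYMBOL-FUNCTION 'FOO) OLD))))
@End(Example)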

@section(Compatibility Issues)

A transition from Lisp@-[2] semantics to Lisp@-[1] semantics would
introduce a considerable amount of incompatibility. There is the
question of implementor problems as well as user problems.

@subsection(Changing existing code)

Large bodies of code already exist that make assumptions about the current
semantics. That code would all have to be changed. Users who did not favor
this change would likely resent the amount of work required to make the 
change, which might be non-trivial.

In some cases, mechanical techniques could diagnose which programs needed
to be changed. However, because of the pervasive use of macros and of 
automatic programming techniques, it would not be possible to do such
diagnosis with 100% reliability in any automatic way.

Compilers could be modified to inform the user of conflicts as they
come up, a sort of machine-gun approach to the problem. This would
address some problems in automatic programming that could not be detected by
statically examining the code which does such programming.

However, some situations are still more complicated because they do not
directly produce code. Instead, they examine and modify code. For example,
compilers, translators, macros, and code walking utilities may have built-in
assumptions about the number of namespaces which are never explicitly 
represented and which therefore elude automatic techniques, possibly leading
to errors or inefficiencies later on.

@subsection(Compatibility packages)

Various compatibility schemes have been proposed which claim to allow these
problems to be eliminated.  For example, we might have a single Common Lisp
with Lisp@-[1] semantics with a compiler switch that allows Lisp@-[2] code 
to be compiled.  Symbols would have function cells, but possibly represented 
as properties on property lists. All old Common Lisp code of the form:

@Begin(Example)
(F ...)
@End(Example)

would be transformed to this:

@Begin(Example)
(FUNCALL #'F ...)
@End(Example)

where @t(FUNCTION) would look things up in the `function cell.' @t(FUNCALL) would
be retained in the compatibility package. A bigger example is more convincing:

@Begin(Example)
(LET ((LIST ...))
 (LIST ...))
@End(Example)

becomes

@Begin(Example)
(LET ((LIST ...))
 (FUNCALL #'LIST ...))
@End(Example)

Such a Common Lisp variation would presumably retain @t(LABELS), possibly
renamed @t(LETREC) for compatibility with Scheme. @t(LET) would be used
in place of @t(FLET).  During the transformation process, variables bound
by occurrences of @t(FLET) and @t(LABELS) in the old code would be renamed
to new names produced by @t(GENSYM), and the value namespace versions of
@t(FLET) and @t(LABELS) would be substituted for the function namespace 
versions.
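
One possible shape of that mechanical translation (the gensym shown is of
course arbitrary, and the target code is in the Lisp@-[1] dialect rather than
in Common Lisp):
@Begin(Example)
;; Old Lisp-2 code:
(FLET ((F (X) (+ X 1))) (F (F 0)))
;; After translation:
(LET ((#:G0042 (LAMBDA (X) (+ X 1)))) (#:G0042 (#:G0042 0)))
@End(Example)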

Possibly some compilers already perform this transformation internally and
will be simplified after the change. And perhaps an implementor will want
to provide a real function cell for this compatibility in order to run old
code relatively fast. Lisps that normally have link tables will need to
provide separate linking code (possibly the old link code) for the
compatibility package.

Unfortunately, there are several problems with this kind of compatibility.

First, it does not lend itself neatly to mixing compiled and interpreted
code, especially when such code is produced at runtime. It becomes
important, for example, to clearly document and control whether
functions which receive expressions to be evaluated or code-walked or
whatever are receiving expressions in Lisp@-[1] or Lisp@-[2]. Since this
information is not likely to have been explicitly represented in the
data flow of the original Lisp@-[2] program, and since our compatibility
scheme does not introduce new data flow to carry such information,
confusion on this point is inevitable.

An analogous problem exists in the Symbolics Lisp environment,
where there are two kinds of @t(FORMAT) strings for ``compatibility''
reasons -- Zetalisp @t(FORMAT) strings and Common Lisp @t(FORMAT) strings.
Zetalisp's @t(FORMAT) treats certain format ops differently
than does Common Lisp @t(FORMAT), and as a result the compatibility is
far from transparent. Programs which accept format strings must clearly
document whether they should be ZL format strings or CL format strings, and
if they call @t(FORMAT) in more than one place they must be sure to be 
consistent about the usage. It should be easy to see how this same kind of
problem would arise with any code that called @t(APPLY), @t(EVAL), 
or @t(COMPILE) or even in some situations involving implicit uses 
of those facilities such as macro definitions.

Also, the compatibility package could expand expressions into code which
was opaque to certain code walkers, compilers, and macro facilities,
which might have made closed-world assumptions about the kinds of
expressions that were likely to come up in a particular application and
hence the kinds of theorems that it would be necessary to prove in order
to accomplish correct expansion. The reason that this might occur is that
the proposed translation may involve treating some forms which were documented
as special forms as if they were macros. This would mean that code analysis
routines which were expecting to see a certain class of expressions would in
fact see a different class of expressions that they did not realize could occur
based on a given class of user input.

Finally, some programs may function correctly but suffer an efficiency loss
which is greater than the simple loss which might be assumed by just analyzing
the theoretical speed of the compatibility code. For example, suppose a macro
would be capable of performing an interesting optimization if it could prove
that a certain side-effect within a certain range of code. The truth is that
the abstract nature of an expression like 
@Begin(Example)
(SYMBOL-FUNCTION @i(symbol))
@End(Example)
makes it easier to reason about than an expression like
@Begin(Example)
(GET @i(symbol) 'SYMBOL-FUNCTION)
@End(Example)
since proofs about how the designated cell will be used are likely to be easier.
For example, to determine whether the former slot is modified, you might have
to seek expressions that look like:
@Begin(Example)
(SETF (SYMBOL-FUNCTION @i(symbol)) @i(function))
@End(Example)
On the other hand, because the @t(GET) form has given up its abstraction,
any of the following could be reason to be nervous about a potential 
side-effect:
@Begin(Example)
(SETF (CADR X) Y) ;Here X holds a pointer into @i(symbol)'s property list
(SETF (GET J K) L)
(SETF (GET Q 'SYMBOL-FUNCTION) R)
(SETF (GET 'FOO B) C)
(SETF (SYMBOL-PLIST M) Q)
@End(Example)

@section(Other Changes to Common Lisp to Accommodate the Merger)

A free variable in Common Lisp is taken to be a dynamic rather than
a lexical reference. This is because there is no global lexical environment
in Common Lisp. Given the code

@BEGIN(Example)
(DEFUN FOO (X) (+ X Y))
@END(Example)

the reference to @t(Y) is special (dynamic). On the surface, in a
single-namespace Lisp, the reference to `+' is free also, and so it is
special (dynamic). One proposed solution is to make the default be lexical
(global), which makes @t(Y) refer to the global value for @t(Y), and `@t(+)' to the
global definition of `@t(+)'.

Thus, there would be a global lexical environment in which symbols
that are used freely would be accessed. Currently there is a global
dynamic environment.

This further change would require users to modify their code, possibly
by adding some @t(SPECIAL) declarations.
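
The user's change might be as small as adding a proclamation for each
variable that is to remain dynamic (a sketch, using the example above):
@Begin(Example)
(PROCLAIM '(SPECIAL Y))         ;Y keeps its dynamic behavior
(DEFUN FOO (X) (+ X Y))         ;under the proposal, + resolves in the global lexical environment
@End(Example)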

[RPG: The issue is more complicated than this. I'll be expanding this.]

@chapter(Non-Technical Arguments)

@section(The Scheme Community)

With the advent of Common Lisp, the Lisp and Scheme communities have
considerably more overlap than they have ever had in the past.

Compatibility with Scheme is not currently a goal of the Common Lisp
design committee. On the other hand, if all other things were equal, there
would be no reason for us to seek to be gratuitously incompatible with
Scheme.

If Common Lisp were to become a Lisp@-[1] dialect, it might prove
possible to develop a layered definition in which Scheme is the base 
layer and Common Lisp is the ``industrial strength'' layer. Scheme
might then be a natural subset for use on smaller computers and 
embedded systems.

Of course, the Scheme community would need to be convinced that this
layering is desirable. Though the entire Scheme community has not been
involved in any discussions of the issue, some of its members have
expressed interest in the idea. Without the full support of the Scheme
community, there is no guarantee that Scheme will not later change in an
incompatible way that will coincidentally thwart such a good faith move
on the part of the Common Lisp community. Although Scheme was originally
formed to accommodate a particular set of design features which its
authors found interesting [Sussman 75], it has survived partially as a
vehicle for experimentation in more ``purist'' features which might
directly conflict with the practical needs of an ``industrial strength''
Lisp. In [Rees 86], for example, it is explicitly stated that Common
Lisp compatibility is not uppermost in the list of concerns of the
Scheme community.  A formal merger which seems appropriate at the moment
could later turn sour.

If we did become compatible, we would have to be careful to not constrain
the Scheme community to stick too closely to the particular definition 
of Scheme which would be this fundamental layer. 

On the other hand, the Scheme designers believe that Common Lisp has a
number of bad warts - two namespaces being foremost - and that, therefore,
compatibility is not as important as it would be if the foundations of
each dialect were more closely akin. If the Common Lisp community were
willing to `see the light,' perhaps the Scheme community's willingness to
remain compatible with the newer, merged dialect would be greater.

If compatibility with Scheme is desired, there are other changes to Common
Lisp that are necessary or desirable. Foremost are tail-recursive
semantics and @t(CALL-WITH-CURRENT-CONTINUATION), which provides first-class
continuations. These changes would be completely upwards
compatible and are not conceptually hard to implement; but there are many
details, so they would not be easy to implement in practice.

@section(The Eulisp Community)

A new community of Lisp users has emerged in Europe, rallying around a
dialect of Lisp which they call Eulisp. The Eulisp group is endeavoring 
to define a new standard for Lisp -- possibly to be proposed to the ISO
community as a standard for Lisp. 

Their approach seems to be to take lessons primarily from the past rather 
than to try to learn from the designs of contemporary dialects of Lisp.
Rather than building on existing specifications, this group is starting
anew, with all the concomitant social pressures of developing a standard
using a slightly too large working group. Key members of this group are
Jerome Chailloux, Julian Padget, John Fitch, Herbert Stoyan, Giuseppe 
Attardi, and Jeff Dalton.

They wish to propose a 3 level standard. At the lowest level, level 0,
is the minimal Lisp, possibly suitable for proving properties of Lisp
programs. The second level, level 1, is of the size and extent of Scheme
and is intended as the right size for small personal computers.  
The highest level, level 2, is about the size and extent of Common Lisp.
It is intended as the industrial strength Lisp with a number of as-of-now
unspecified environmental features.

The key differences between Eulisp as it now stands and Common Lisp include
a unified variable namespace, as in Lisp@-[1], and simplified lambda-lists
(which do not allow @t(&OPTIONAL) or @t(&KEY)).

At one face-to-face meeting, Chailloux and Padget stated that if Common
Lisp were to collapse the two namespaces, they would be willing to adopt
fancy lambda-lists. These two concessions are possibly enough to lay the
groundwork for a widespread set of compromises, mostly from them, on the
merger of the two communities.

Without this technical merger, a messy political battle could ensue.

@section(The Japanese Community)

The Japanese community seems to want to stick with Common Lisp pretty much
as it is today, according to reports from Prof. Ida. It appears that there
is a heavy commitment to the current definition in the commercial marketplace
there. The political issue seems to be that we will have a battle at the ISO
level regardless of the decision we make.

@section(The Community of Small Computer Users)

Owners of small computers who would like to run Lisp are effectively 
forced to run Scheme since implementations of Common Lisp for small 
computers are very hard to find.

This makes it appear as if Common Lisp were not designed to meet
the commercial needs of small computer vendors, because most of the 
commercial world uses small computers and hardly any serious Common Lisp
implementations have been attempted even in the face of a potentially
large market.

If Scheme or something like it could be incorporated as a subset or
as a fundamental lower level, it might be possible to make an 
interesting subset of Common Lisp available for users of these machines
in a useful way.

On the other hand, the average size of a `small' computer is increasing 
at a brisk pace. By the time any standard is finally approved by ANSI, 
the cost of supporting a ``level 2'' Common Lisp may not be prohibitive
even on `small' computers.

@section(The Lisp Community is Changing its Tune)

The Lisp vendors have convinced the commercial world that the Lisp
community has decided to get its act together and settle on one dialect of
Lisp -- Common Lisp -- and that it is time to start writing commercial
products using that dialect. If the Lisp community now changes Common Lisp
in such an incompatible way, the newly-convinced commercial world
might balk.

It is possible, of course, that an X3 sanction of such a change as
part of not only a US standard Lisp, but an international standard 
Lisp, would make it easier for vendors to accept such a change. But 
preliminary feedback from a number of large vendors and large consumers
suggests that some people consider there to already be an informal 
standard which they would be dismayed to see changed.

@section(Can Vendors Afford the Merger?)

Most Lisp vendors have their hands full improving and honing their
products. Typically these vendors schedule tasks 6 months to a year ahead.
At the very least, if the merger takes @i(n) person-months to accomplish for
some vendor, there are @i(n) person-months of something else very useful that
does not get done.  If the task of the merger is on the order of
several person-years for a vendor, that vendor might not survive the
change.  If there are such vendors, then the legal protections of CBEMA
for members will be tested. 

In some cases, vendors may just decide not to support Common Lisp
and to go their own way if they feel that the changes make it no longer
an attractive or economical item to support.

Gabriel has estimated that at Lucid about 2 person-months are needed to
make the change initially followed by 4 person-months of shakedown.

Current estimates are that such a change to the Symbolics Lisp environment 
would take much longer. Allowing proper time for quality assurance
and customer transition including backward compatibility, the process
could drag out for years. Symbolics does not believe that compatibility 
schemes such as those touched on in this paper are likely to offer any
significant aid in this process.

@section(Can Users Afford the Merger?)

Many users have a lot of Common Lisp code now. Symbolics users have just
undergone a major switchover from Zetalisp to Common Lisp. Even though
many of the changes are acknowledged by customers to be in their long
term best interest, they have found the transition to be painful. Part 
of the reason is that they were convinced that the change was worthwhile
by arguments that Common Lisp would remain fairly stable. If we pull the
rug out from under them with another major change, they may get pretty
nervous.

In the days of informally supported Lisps being distributed by Universities,
we might have considered simply having a flag day after which things would
behave differently and have everyone simply agree to take the headache 
for the sake of some nebulous general good. Now that customers have invested
large amounts of money and perhaps entire businesses in products that depend
on the exact workings of Common Lisp, it is not likely to be so easy to
convince them to change.

Again, perhaps X3 can help by conditioning users for the switchover, and
perhaps DARPA can pay for tools to help users; for example, a codewalker
could be written that could be used to parse files and to report places in
the code that require attention. However, for some customers, even this
might be too much work, too great an expense, or simply too impractical
in light of existing product distribution schedules.

@chapter(Summary)

There are compelling technical and non-technical reasons on both sides
of the issue. This white paper is an attempt to outline them so that
an informed decision can be reached.

@center(@i[Acknowledgments])

The authors gratefully acknowledge the useful commentary and support of 
Scott Fahlman and Will Clinger.

@center(@i[References])

[Abelson 85] H. Abelson and G.J. Sussman with J. Sussman,
@bi(Structure and Interpretation of Computer Programs), The MIT Press,
Cambridge, Massachusetts, 1985.

[Brooks 82] R.A. Brooks, R.P. Gabriel, and G.L. Steele, Jr., @bi(S-1 Common Lisp
Implementation), Proceedings of the 1982 ACM Symposium on Lisp and Functional 
Programming, Pittsburgh, PA, August 1982.

[Digital 86] Digital Equipment Corporation, @bi(VAX LISP/VMS User's Guide),
Maynard, MA, 1986.

[Kohlbecker 86] E.E. Kohlbecker, Jr., @bi(Syntactic Extensions in the 
Programming Language Lisp), PhD thesis, Indiana University, August 1986.

[Lucid 86] Lucid, Inc., @bi(Lucid Common Lisp Reference Manual for the VAX),
Menlo Park, CA, 1986.

[McCarthy 65] J. McCarthy, et al., @bi(Lisp 1.5 Programmer's Manual),
The MIT Press, Cambridge, Massachusetts, 1965.

[Padget 86] J. Padget, et al., @b(``Desiderata for the standardisation
of LISP,'') @i(Proceedings of the 1986 ACM Conference on Lisp and Functional 
Programming), Cambridge, MA, August 1986.

[Pitman 83] K.M. Pitman, @bi(The Revised Maclisp Manual) (Saturday Evening Edition),
LCS Technical Report 295, MIT, May 1983.

[Rees 86] J. Rees and W. Clinger, editors, @b(``Revised↑3 Report on the
Algorithmic Language Scheme,'') @i(SIGPLAN Notices) 21(12), September 1986.

[Steele 84] G.L. Steele, Jr., @bi(Common Lisp, The Language), 
Digital Press, Billerica, Massachusetts, 1984.

[Steele 86] G.L. Steele, Jr. and W.D. Hillis, @b(``Connection Machine@+[(R)]
Lisp: Fine-Grained Parallel Symbolic Processing,'') @i(Proceedings of the
1986 ACM Conference on Lisp and Functional Programming), Cambridge, MA, August 1986.

[Sussman 75] G.J. Sussman and G.L. Steele, Jr., @b(``SCHEME: An Interpreter
for Extended Lambda Calculus,'') AI Memo 349, MIT, Cambridge, MA, December 1975.

[Symbolics 86a] Symbolics, Inc., @bi(MACSYMA Reference Manual), Cambridge, MA, 1986.

[Symbolics 86b] Symbolics, Inc., @bi(Symbolics Common Lisp: Language Concepts),
@i(Encyclopedia Symbolica), Volume 2A, pp. 296-297, Cambridge, MA, 1986.